```r
knitr::opts_chunk$set(message = FALSE,
                      warning = FALSE,
                      echo = TRUE,
                      include = TRUE,
                      fig.align = "center",
                      out.width = "100%")

set.seed(1982)

library(INLA)
library(here)
```

Consider the AR(1) process
\[ u_1 \sim \text{N}(0, (\tau_u(1-\rho^2))^{-1}), \qquad u_t = \rho u_{t-1} + \epsilon_t, \qquad \epsilon_t\sim \text{N}(0, \tau_u^{-1} = \sigma_u^2) \qquad |\rho|<1\qquad \text{or}\qquad u_t - \rho u_{t-1} \sim \text{N}(0, \tau_u^{-1} = \sigma_u^2) \]
\[ E(u_t) = 0,\qquad V(u_t) = \gamma(0) = (\tau_u(1-\rho^2))^{-1}, \qquad \gamma(h) = \rho^{|h|}(\tau_u(1-\rho^2))^{-1} \]
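As a quick sanity check of these formulas, we can simulate a long stationary AR(1) path and compare the empirical moments with the theoretical ones (the values of \(\tau_u\) and \(\rho\) below are arbitrary illustration choices, not estimates from any data):

```r
# Simulate a stationary AR(1) path and check the moments stated above.
set.seed(1)
tau_u <- 2           # innovation precision (arbitrary)
rho   <- 0.7         # autocorrelation (arbitrary)
n     <- 2e5
gamma0 <- 1 / (tau_u * (1 - rho^2))   # theoretical marginal variance gamma(0)

u <- numeric(n)
u[1] <- rnorm(1, sd = sqrt(gamma0))   # draw u_1 from the stationary law
for (t in 2:n) u[t] <- rho * u[t - 1] + rnorm(1, sd = sqrt(1 / tau_u))

c(empirical.var = var(u), theoretical.var = gamma0)
# gamma(1) = rho * gamma(0):
c(empirical.cov = cov(u[-1], u[-n]), theoretical.cov = rho * gamma0)
```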
The marginal precision is \(\kappa = \tau_u(1-\rho^2)\), i.e., the reciprocal of the marginal variance \(V(u_t)\). Internally, INLA works with \(\theta_1 = \log(\kappa)\) and \(\theta_2 = \log\left(\dfrac{1+\rho}{1-\rho}\right)\).
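A round-trip through this internal parametrisation, as a small base-R sketch (`to.internal` and `from.internal` are ad-hoc helper names for illustration, not INLA functions):

```r
# Map (kappa, rho) to INLA's internal scale (theta1, theta2) and back.
to.internal <- function(kappa, rho) {
  c(theta1 = log(kappa), theta2 = log((1 + rho) / (1 - rho)))
}
from.internal <- function(theta) {
  c(kappa = exp(theta[["theta1"]]),
    rho   = (exp(theta[["theta2"]]) - 1) / (exp(theta[["theta2"]]) + 1))
}

th <- to.internal(kappa = 2.5, rho = 0.8)
th                 # theta2 = log(9), since (1 + 0.8) / (1 - 0.8) = 9
from.internal(th)  # recovers kappa = 2.5, rho = 0.8
```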
The default prior for \(\theta_1\) is log-gamma with parameters 1 and 5e-05, i.e., the prior for \(\kappa\) is gamma. The default prior for \(\theta_2\) is normal with mean 0 and precision 0.15. By the change of variables \(\theta_2 = \log\left(\dfrac{1+\rho}{1-\rho}\right)\), with \(\dfrac{d\theta_2}{d\rho} = \dfrac{2}{1-\rho^2}\), the implied prior for \(\rho\) is
\[ \pi(\rho) = \pi(\theta_2)\left|\dfrac{d\theta_2}{d\rho}\right| = \dfrac{\sqrt{0.15}}{\sqrt{2\pi}}\exp\left(-\dfrac{0.15}{2}\left[\log\left(\dfrac{1+\rho}{1-\rho}\right)\right]^2\right)\dfrac{2}{1-\rho^2}. \]
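We can verify this change-of-variables density numerically: it should integrate to one over \((-1, 1)\), and tail probabilities computed by quadrature should match Monte Carlo draws of \(\theta_2\) pushed through \(\rho = (e^{\theta_2}-1)/(e^{\theta_2}+1)\):

```r
# Implied prior density of rho when theta2 ~ N(mean 0, precision 0.15).
prior.rho <- function(rho) {
  theta2 <- log((1 + rho) / (1 - rho))
  dnorm(theta2, mean = 0, sd = 1 / sqrt(0.15)) * 2 / (1 - rho^2)
}
integrate(prior.rho, -1, 1)$value   # ~ 1

set.seed(2)
theta2.samp <- rnorm(1e5, mean = 0, sd = 1 / sqrt(0.15))
rho.samp    <- (exp(theta2.samp) - 1) / (exp(theta2.samp) + 1)
c(mc   = mean(rho.samp > 0.5),
  quad = integrate(prior.rho, 0.5, 1)$value)   # both estimate P(rho > 0.5)
```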
```r
# monthly harmonised unemployment rates
temp <- read.csv("data/harmonised-unemployment-rates-mo.csv")
n <- nrow(temp) - 1
data <- data.frame(y = temp[1:n, 2], t = 1:n)
dates <- temp[1:n, 1]

plot(data$t, data$y, lwd = 2, pch = 19,
     xlab = "month", ylab = "Unemployment Rates")
lines(data$t, data$y)
abline(h = 2 * (-8:9), lty = 2, col = gray(0.5))
```

We model the series as

\[ y_i = \beta_0 + u_i + \epsilon_i, \qquad \epsilon_i\sim \text{N}(0, \tau_\epsilon^{-1} = \sigma_\epsilon^2), \qquad u_i \sim \text{N}(0, \kappa^{-1}), \qquad \kappa = \tau_u(1-\rho^2) \]
```r
family <- "gaussian"
formula1 <- y ~ f(t, model = "ar1")
res1 <- inla(formula = formula1,
             data = data,
             family = family)
summary(res1)
```

```
## Time used:
##     Pre = 0.28, Running = 0.163, Post = 0.0267, Total = 0.47 
## Fixed effects:
##              mean    sd 0.025quant 0.5quant 0.975quant  mode kld
## (Intercept) 8.545 0.705      7.048    8.562      9.928 8.558   0
## 
## Random effects:
##   Name    Model
##      t AR1 model
## 
## Model hyperparameters:
##                                             mean       sd 0.025quant 0.5quant
## Precision for the Gaussian observations 2.20e+04 2.37e+04   1463.583 1.45e+04
## Precision for t                         2.73e-01 9.00e-02      0.129 2.63e-01
## Rho for t                               8.89e-01 3.60e-02      0.807 8.93e-01
##                                         0.975quant     mode
## Precision for the Gaussian observations   8.49e+04 3995.248
## Precision for t                           4.77e-01    0.243
## Rho for t                                 9.48e-01    0.900
## 
## Marginal log-Likelihood:  -199.90 
##  is computed 
## Posterior summaries for the linear predictor and the fitted values are computed
## (Posterior marginals needs also 'control.compute=list(return.marginals.predictor=TRUE)')
```

```r
plot(res1$summary.random$t[, "mean"] + res1$summary.fixed$mean[1],
     ylab = "fitting result", type = "l")
points(data$y, col = "blue")
```

Remarks:

- `res1$summary.random$t[, "mean"]` corresponds to \(u_i\).

```r
res1$internal.summary.hyperpar[2, ] |>
  head() |>
  knitr::kable(format = "markdown", caption = "Internal hyperparameters")
```

|                     |      mean |        sd | 0.025quant |  0.5quant | 0.975quant |     mode |
|---------------------|-----------|-----------|------------|-----------|------------|----------|
| Log precision for t | -1.353642 | 0.3335297 |  -2.049611 | -1.336975 | -0.7397217 | -1.27503 |

Table: Internal hyperparameters
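Because quantiles are equivariant under monotone transformations, exponentiating the internal log-precision quantiles recovers the quantiles reported for `Precision for t` in the summary above (the posterior means do not map this way, since \(E[e^X] \ne e^{E[X]}\)):

```r
# Values copied from the two summaries above.
internal  <- c(-2.049611, -1.336975, -0.7397217)   # log-precision quantiles
precision <- c(0.129, 0.263, 0.477)                # reported precision quantiles
round(exp(internal), 3)                            # matches `precision`
```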
```r
# fix theta1 (the log marginal precision) at 0.5, i.e. kappa = exp(0.5),
# so no inference is done on it
hyper2 <- list(theta1 = list(initial = 0.5, fixed = TRUE))
formula2 <- y ~ f(t, model = "ar1", hyper = hyper2)
res2 <- inla(formula = formula2, data = data, family = family)
summary(res2)
```

```
## Time used:
##     Pre = 0.131, Running = 0.15, Post = 0.00857, Total = 0.289 
## Fixed effects:
##             mean   sd 0.025quant 0.5quant 0.975quant  mode kld
## (Intercept) 8.65 0.24      8.175    8.651       9.12 8.651   0
## 
## Random effects:
##   Name    Model
##      t AR1 model
## 
## Model hyperparameters:
##                                          mean    sd 0.025quant 0.5quant
## Precision for the Gaussian observations 2.089 0.579      1.153    2.021
## Rho for t                               0.853 0.034      0.779    0.857
##                                         0.975quant  mode
## Precision for the Gaussian observations      3.413 1.899
## Rho for t                                    0.911 0.862
## 
## Marginal log-Likelihood:  -236.31 
##  is computed 
## Posterior summaries for the linear predictor and the fitted values are computed
## (Posterior marginals needs also 'control.compute=list(return.marginals.predictor=TRUE)')
```

Note that `Precision for t` no longer appears among the hyperparameters, since \(\theta_1\) is fixed.

```r
plot(data$y, col = "blue", ylab = "fitting result")
lines(res2$summary.random$t[, "mean"] + res2$summary.fixed$mean[1])
```
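The next model uses PC (penalised complexity) priors. For `pc.prec`, `param = c(u, alpha)` encodes the tail statement \(P(\sigma > u) = \alpha\) via an exponential prior on the standard deviation with rate \(\lambda = -\log(\alpha)/u\); `pc.cor1` analogously encodes \(P(\rho > u) = \alpha\) with base model \(\rho = 1\). A quick check of the `pc.prec` statement for the values used below:

```r
# pc.prec with param = c(u, alpha): exponential prior on sigma with rate
# lambda = -log(alpha) / u, so that P(sigma > u) = alpha.
u <- 0.06; alpha <- 0.008
lambda <- -log(alpha) / u
c(lambda = lambda,
  tail.prob = pexp(u, rate = lambda, lower.tail = FALSE))  # tail.prob = alpha
```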
```r
family <- "gaussian"
# PC priors: pc.prec with (u, alpha) = (0.06, 0.008),
#            pc.cor1 with (u, alpha) = (0.9, 0.9)
hyper3 <- list(theta1 = list(prior = "pc.prec", param = c(0.06, 0.008)),
               theta2 = list(prior = "pc.cor1", param = c(0.9, 0.9)))
formula3 <- y ~ f(t, model = "ar1", hyper = hyper3)
res3 <- inla(formula = formula3, data = data, family = family,
             control.predictor = list(compute = TRUE))
summary(res3)
```

```
## Time used:
##     Pre = 0.122, Running = 0.158, Post = 0.00897, Total = 0.289 
## Fixed effects:
##              mean    sd 0.025quant 0.5quant 0.975quant  mode kld
## (Intercept) 8.644 0.247      8.155    8.645      9.128 8.645   0
## 
## Random effects:
##   Name    Model
##      t AR1 model
## 
## Model hyperparameters:
##                                          mean    sd 0.025quant 0.5quant
## Precision for the Gaussian observations 6.635 4.174      2.065    5.567
## Precision for t                         1.064 0.173      0.753    1.054
## Rho for t                               0.801 0.035      0.723    0.805
##                                         0.975quant  mode
## Precision for the Gaussian observations     17.673 4.040
## Precision for t                              1.432 1.040
## Rho for t                                    0.861 0.812
## 
## Marginal log-Likelihood:  -294.50 
##  is computed 
## Posterior summaries for the linear predictor and the fitted values are computed
## (Posterior marginals needs also 'control.compute=list(return.marginals.predictor=TRUE)')
```

```r
plot(data$y, col = "blue", ylab = "fitting result")
lines(res3$summary.random$t[, "mean"] + res3$summary.fixed$mean[1])

# store the posterior marginals of rho on both scales
rho.df <- list(rho.intern = res3$internal.marginals.hyperpar$`Rho_intern for t`,
               rho = res3$marginals.hyperpar$`Rho for t`)
```
```r
# posterior of rho: directly, and via transformation of the internal marginal
marginal.posterior.rho <- inla.tmarginal(fun = function(x) x,
                                         marginal = rho.df$rho)
marginal.posterior.rho.internal <-
  inla.tmarginal(fun = function(x) 2 * exp(x) / (1 + exp(x)) - 1,
                 marginal = rho.df$rho.intern)
plot(marginal.posterior.rho, type = "p", xlab = expression(rho),
     ylab = "Probability density", col = "red")
lines(marginal.posterior.rho.internal, col = "blue")
legend("topright", pch = 1, col = c("red", "blue"),
       legend = c("rho", "rho from intern"))
```

```r
# posterior of theta2 = log((1 + rho) / (1 - rho)): directly, and via
# transformation of rho
marginal.posterior.rho.intern <- inla.tmarginal(fun = function(x) x,
                                                marginal = rho.df$rho.intern)
marginal.posterior.rho.intern.from.rho <-
  inla.tmarginal(fun = function(x) log((1 + x) / (1 - x)),
                 marginal = rho.df$rho)
plot(marginal.posterior.rho.intern, type = "p", xlab = expression(theta[2]),
     ylab = "Probability density", col = "red")
lines(marginal.posterior.rho.intern.from.rho, col = "blue")
legend("topright", pch = 1, col = c("red", "blue"),
       legend = c("rho intern", "rho intern from rho"))
```

Observe the near-normal shape on the internal scale: \(\theta_2\) is unconstrained on \(\mathbb{R}\), and its posterior (like the default normal prior for \(\theta_2\)) is close to Gaussian, whereas the marginal of \(\rho\) on \((-1, 1)\) is visibly skewed.